3D Semantic Segmentation


3D semantic segmentation is a computer vision task that partitions a 3D point cloud or 3D mesh into semantically meaningful parts or regions. The goal is to identify and label the objects and object parts within a 3D scene, producing per-point (or per-face) labels that support applications such as robotics, autonomous driving, and augmented reality. A minimal illustrative sketch of this input/output structure follows.
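The sketch below shows, in PyTorch, a tiny PointNet-style per-point classifier that maps a point cloud of shape (N, 3) to one semantic label per point. It is only an illustration of the task's interface under assumed settings (network sizes, class count, synthetic input); it is not the method of any paper listed on this page.

```python
# Illustrative sketch only: a PointNet-style per-point classifier.
# All layer sizes, the class count, and the synthetic data are assumptions.
import torch
import torch.nn as nn


class TinyPointSegNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Shared per-point MLP, applied independently to every point.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # Per-point head that also receives a global max-pooled feature,
        # so each point's label can depend on scene-level context.
        self.head = nn.Sequential(
            nn.Linear(128 + 128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) -> per-point class logits of shape (B, N, num_classes)
        feats = self.point_mlp(points)                        # (B, N, 128)
        global_feat = feats.max(dim=1, keepdim=True).values   # (B, 1, 128)
        global_feat = global_feat.expand(-1, feats.shape[1], -1)
        return self.head(torch.cat([feats, global_feat], dim=-1))


if __name__ == "__main__":
    model = TinyPointSegNet(num_classes=4)
    cloud = torch.rand(2, 1024, 3)      # two synthetic clouds of 1024 points each
    logits = model(cloud)               # (2, 1024, 4)
    labels = logits.argmax(dim=-1)      # per-point semantic label ids
    print(labels.shape)                 # torch.Size([2, 1024])
```

In practice, published approaches replace this toy per-point MLP with richer backbones (sparse voxel convolutions, point transformers, or 2D-to-3D label transfer), but the output format, one semantic label per point, is the same.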

Is Semantic SLAM Ready for Embedded Systems? A Comparative Survey

May 18, 2025

Mapping Semantic Segmentation to Point Clouds Using Structure from Motion for Forest Analysis

May 15, 2025

APCoTTA: Continual Test-Time Adaptation for Semantic Segmentation of Airborne LiDAR Point Clouds

May 15, 2025

MESSI: A Multi-Elevation Semantic Segmentation Image Dataset of an Urban Environment

May 13, 2025

TUM2TWIN: Introducing the Large-Scale Multimodal Urban Digital Twin Benchmark Dataset

May 13, 2025

Improving Open-Set Semantic Segmentation in 3D Point Clouds by Conditional Channel Capacity Maximization: Preliminary Results

May 09, 2025

MFSeg: Efficient Multi-frame 3D Semantic Segmentation

May 07, 2025

3D Can Be Explored In 2D: Pseudo-Label Generation for LiDAR Point Clouds Using Sensor-Intensity-Based 2D Semantic Segmentation

May 06, 2025

SELECT: A Submodular Approach for Active LiDAR Semantic Segmentation

May 06, 2025

Estimating the Diameter at Breast Height of Trees in a Forest With a Single 360 Camera

May 06, 2025